10 research outputs found

    Weighing Counts: Sequential Crowd Counting by Reinforcement Learning

    We formulate counting as a sequential decision problem and present a novel crowd counting model solvable by deep reinforcement learning. In contrast to existing counting models that directly output count values, we divide one-step estimation into a sequence of much easier and more tractable sub-decision problems. This sequential decision nature corresponds exactly to a physical process in reality: scale weighing. Inspired by scale weighing, we propose a novel 'counting scale' termed LibraNet, in which the count value is analogized to a weight. By virtually placing a crowd image on one side of a scale, LibraNet (the agent) sequentially learns to place appropriate weights on the other side to match the crowd count. At each step, LibraNet chooses one weight (action) from the weight box (the pre-defined action pool) according to the current crowd image features and the weights already placed on the scale pan (state). LibraNet is required to learn to balance the scale according to the feedback of the needle (Q values). We show that LibraNet exactly implements scale weighing by visualizing the decision process of how it chooses actions. Extensive experiments demonstrate the effectiveness of our design choices and report state-of-the-art results on several crowd counting benchmarks. We also demonstrate good cross-dataset generalization of LibraNet. Code and models are made available at: https://git.io/libranet
    Comment: Accepted to Proc. Eur. Conf. Computer Vision (ECCV) 2020
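    As a concrete reading of the decision loop described above, here is a minimal sketch in Python. The action pool values, the stubbed Q-network, and the feature input are illustrative assumptions, not the authors' implementation (their code is at https://git.io/libranet).

```python
# A minimal sketch of the sequential weighing loop described above.
# The action pool values, the stubbed Q-network, and the feature input
# are illustrative assumptions, not the authors' implementation.
import numpy as np

ACTION_POOL = [1, 2, 5, 10, -1, -2, -5, -10, 0]  # 0 = "scale balanced", stop
MAX_STEPS = 20

def q_values(image_features: np.ndarray, placed_weight: int) -> np.ndarray:
    """Stand-in for a learned Q-network: scores every action given the
    state (crowd image features + weight already on the scale pan).
    The stub ignores the features and returns deterministic noise."""
    rng = np.random.default_rng(abs(placed_weight))
    return rng.normal(size=len(ACTION_POOL))

def weigh(image_features: np.ndarray) -> int:
    """Sequentially add/remove weights until the agent declares balance."""
    total = 0
    for _ in range(MAX_STEPS):
        action = ACTION_POOL[int(np.argmax(q_values(image_features, total)))]
        if action == 0:      # the needle reads balanced: stop weighing
            break
        total += action      # place (or remove) a weight on the pan
    return max(total, 0)     # the accumulated weight is the count estimate

print(weigh(np.zeros(128)))  # e.g. run on a dummy 128-d feature vector
```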

    Software defect prediction: do different classifiers find the same defects?

    Open Access: CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/)
    During the last 10 years, hundreds of different defect prediction models have been published. The performance of the classifiers used in these models is reported to be similar, with models rarely performing above the predictive performance ceiling of about 80% recall. We investigate the individual defects that four classifiers predict and analyse the level of prediction uncertainty produced by these classifiers. We perform a sensitivity analysis to compare the performance of Random Forest, Naïve Bayes, RPart and SVM classifiers when predicting defects in NASA, open source and commercial datasets. The defect predictions that each classifier makes are captured in a confusion matrix, and the prediction uncertainty of each classifier is compared. Despite similar predictive performance values for these four classifiers, each detects different sets of defects. Some classifiers are more consistent in predicting defects than others. Our results confirm that a unique subset of defects can be detected by specific classifiers. However, while some classifiers are consistent in the predictions they make, others vary in their predictions. Given our results, we conclude that classifier ensembles with decision-making strategies not based on majority voting are likely to perform best in defect prediction.
    Peer reviewed. Final published version.
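    To make the comparison concrete, here is a minimal sketch of capturing each classifier's predictions and intersecting the true defects they find. It uses scikit-learn stand-ins on synthetic data (DecisionTreeClassifier approximates RPart, an R package); the paper's datasets, tuning, and sensitivity analysis are not reproduced.

```python
# A minimal sketch of the per-defect comparison described above, using
# scikit-learn stand-ins on synthetic data (DecisionTreeClassifier
# approximates RPart, an R package); the paper's NASA, open-source and
# commercial datasets and its sensitivity analysis are not reproduced.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Imbalanced binary data: class 1 plays the role of "defective module".
X, y = make_classification(n_samples=1000, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

classifiers = {
    "RandomForest": RandomForestClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "RPart-like": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}

detected = {}  # true defects (test-set indices) each classifier finds
for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, "tn fp fn tp =", confusion_matrix(y_te, pred).ravel())
    detected[name] = set(np.where((pred == 1) & (y_te == 1))[0])

# Defects only one classifier catches -- the "unique subset" in the paper.
for name, found in detected.items():
    others = set().union(*(v for k, v in detected.items() if k != name))
    print(name, "unique true positives:", len(found - others))
```

    The unique true positives printed at the end illustrate why majority voting can hurt: a defect found by only one of four classifiers is outvoted, which is what motivates ensembles with other decision-making strategies.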

    Smart measurements and analysis for software quality enhancement

    Requests to improve the quality of software are increasing due to competition in the software industry and the complexity of software development, which integrates multiple technology domains (e.g., IoT, Big Data, Cloud, Artificial Intelligence, Security Technologies). Measurement collection and analysis is a key activity for assessing software quality throughout the development life cycle. To optimize this activity, our main idea is to periodically select relevant measures to execute (among a set of possible measures) and to automate their analysis with a dedicated tool. The proposed solution is integrated into a PaaS platform called MEASURE. The tools supporting this activity are the Software Metric Suggester tool, which uses artificial intelligence to recommend metrics of interest according to several software development constraints, and the MINT tool, which correlates collected measurements and provides near real-time recommendations to software development stakeholders (i.e., the DevOps team, project manager, human resources manager, etc.) to improve the quality of the development process. To illustrate the efficiency of both tools, we created different scenarios to which both approaches were applied. The results show that the two tools are complementary and can be used to improve the software development process and thus the quality of the final software.
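    As one plausible reading of the measurement-correlation step, here is a minimal sketch that flags strongly correlated metric streams and turns them into recommendations. The metric names, the synthetic streams, and the Pearson-correlation rule are illustrative assumptions; the abstract does not specify MINT's actual correlation logic.

```python
# A minimal sketch of correlating collected measurements and emitting
# recommendations, in the spirit of the MINT tool described above. The
# metric names, the synthetic streams, and the Pearson-correlation rule
# are illustrative assumptions; the abstract does not specify MINT's logic.
import numpy as np

rng = np.random.default_rng(1)
coverage = rng.normal(0.70, 0.05, 30)                # periodic measurements
defects = 20 - 15 * coverage + rng.normal(0, 1, 30)  # coupled by construction
measurements = {
    "code_churn": rng.normal(50, 10, 30),
    "test_coverage": coverage,
    "open_defects": defects,
}

def correlated_pairs(streams, threshold=0.5):
    """Yield metric pairs whose Pearson correlation exceeds the threshold."""
    names = list(streams)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = np.corrcoef(streams[a], streams[b])[0, 1]
            if abs(r) >= threshold:
                yield a, b, r

# Each flagged pair becomes a near real-time hint for the stakeholders.
for a, b, r in correlated_pairs(measurements):
    print(f"recommendation: review '{a}' together with '{b}' (r = {r:+.2f})")
```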